In this episode, Sarah Guo and Elad Gil discuss their predictions for 2026, focusing on AI trends including foundation models, robotics, self-driving technologies, IPOs, consumer AI innovation, and potential breakthroughs in industries like defense, healthcare, and drug discovery.
Dwarkesh interviews Ilya Sutskever about the challenges of scaling AI, exploring why current models perform well on benchmarks but struggle with real-world generalization, and discussing potential paths to developing safe and beneficial superintelligent AI.
A wide-ranging discussion of AI, robotics, health, and potential alien technologies, covering everything from AGI timelines and job automation to protein folding breakthroughs, humanoid robots, and the possibility of extraterrestrial intelligence.
Mark Chen, OpenAI's Chief Research Officer, discusses the company's research priorities, talent recruitment, competitive landscape in AI, and his optimistic view on the potential of AI to drive scientific discovery and potentially reach AGI within the next few years.
A deep dive into NVIDIA's defensive tweet about Google's TPUs, OpenAI's potential funding challenges, and the mysterious revenue plans of ex-OpenAI chief scientist Ilya Sutskever's new AI startup.
In a wide-ranging interview, Łukasz Kaiser, a key architect of modern AI, explains why AI progress continues to advance smoothly, highlighting the shift from pre-training to reasoning models and the potential of multimodal AI, robots, and generalization.
In this episode, Ilya Sutskever discusses SSI's research approach, the challenges of AI generalization, and the potential for developing superintelligent AI that cares about sentient life through continual learning and incremental deployment.
Philip Clark of Thrive Capital discusses the firm's concentrated investment strategy across groundbreaking companies like OpenAI, Cursor, Wiz, Nudge, and Physical Intelligence, highlighting their focus on transformative technologies in AI, hardware, and emerging domains like brain engineering.
Aidan Gomez, co-founder and CEO of Cohere, discusses the transformative potential of AI in enterprise, reflecting on his journey from Google Brain researcher to building an AI platform focused on deploying large language models across critical industries.
In this episode, Eugenia Kuyda discusses how personal software will transform from a developer monopoly to a creative medium where anyone can create, remix, and share mini-apps as easily as posting a video, focusing on deep personalization and making AI interfaces more intuitive and accessible.
A deep dive into how OpenAI's VP of Research Jerry Tworek thinks about AI reasoning, reinforcement learning, and the path to AGI through pre-training and scaled reinforcement learning techniques.
A deep dive into Sam Altman's journey with OpenAI, exploring its transformation from a nonprofit vision to a Microsoft-backed AI powerhouse, including the dramatic 2023 board firing and the complex ethical questions surrounding artificial general intelligence.
Joe Hudson, an executive coach working with AI research teams, shares insights into the psychological and emotional landscape of AI developers, emphasizing the importance of understanding their motivations and concerns about humanity's future, and the need for supportive rather than shame-based engagement with those building transformative AI technologies.
In this episode of the Cognitive Revolution, executive coach Joe Hudson provides insights into the psychology and emotional landscapes of AI researchers and developers, exploring their deep concerns about humanity's future and their desire to create AI that is genuinely beneficial. Hudson emphasizes the importance of supporting and encouraging these innovators, arguing that understanding their emotional processes and motivations is crucial to guiding AI development in a positive direction.
Dr. Roman Yampolskiy, a computer science professor and AI safety expert, warns that artificial general intelligence (AGI) could arrive by 2027, potentially leading to 99% unemployment and posing an existential threat to humanity. He argues that we cannot control superintelligent AI and that its development could result in human extinction, while also discussing his belief that we are likely living in a simulation created by a more advanced intelligence.